Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning

Authors

Abstract

Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing. For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share/aggregate data to a centralized location from distinct ones. This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices and the learning algorithm runs on-device, with the aim of relaxing the burden at a central entity/server. Although gossip-based approaches have been used for this purpose in different use cases, they suffer from high communication costs, especially when the number of devices is large. To mitigate this, incremental-based methods are proposed. We first introduce incremental block-coordinate descent (I-BCD) for decentralized ML, which can reduce communication costs at the expense of running time. To accelerate the convergence speed, an asynchronous parallel incremental BCD (API-BCD) method is proposed, in which multiple devices/agents are active in an asynchronous fashion. We also derive convergence properties for the proposed methods. Simulation results show that our API-BCD method outperforms the state of the art in terms of running time and communication costs.
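The abstract describes I-BCD and API-BCD only at a high level. As a rough, self-contained illustration of the incremental block-coordinate idea (not the authors' exact I-BCD or API-BCD algorithm), the sketch below runs a toy decentralized ridge-regression problem in which a single token carrying the shared residual visits the devices in turn, and each device updates only its own coordinate block. The problem sizes, step size, and ring schedule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decentralized ridge regression: global objective
#   f(x) = 0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2,
# where each device owns one column block of A and the matching block of x.
# (Illustrative sizes and step size; not the paper's setup.)
n_devices, block_size, n_samples, lam = 5, 4, 50, 0.1
A_blocks = [rng.standard_normal((n_samples, block_size)) for _ in range(n_devices)]
b = rng.standard_normal(n_samples)
x_blocks = [np.zeros(block_size) for _ in range(n_devices)]

# Incremental pass: a "token" carrying the shared residual r = A x - b travels
# around a ring of devices; only the device holding the token updates its block.
residual = -b.copy()                       # equals A x - b for the initial x = 0
step = 0.01
for sweep in range(200):
    for i in range(n_devices):             # ring order: one device active at a time
        grad_i = A_blocks[i].T @ residual + lam * x_blocks[i]
        new_block = x_blocks[i] - step * grad_i
        residual += A_blocks[i] @ (new_block - x_blocks[i])   # keep token consistent
        x_blocks[i] = new_block             # token is then passed to the next device

obj = 0.5 * residual @ residual + 0.5 * lam * sum(xb @ xb for xb in x_blocks)
print("final objective:", obj)
```

Because only the residual (one vector of length n_samples) travels between devices per update, communication per step stays small, which is the trade-off against running time that the abstract refers to.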


Similar Articles

Asynchronous Parallel Greedy Coordinate Descent

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the...
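As context for the greedy rule mentioned above, here is a minimal single-worker sketch of greedy coordinate descent on a box-constrained quadratic: at each step, the coordinate whose projected coordinate step moves the most is selected and updated. The problem instance and constants are illustrative assumptions; the paper's Asy-GCD runs such updates asynchronously across multiple workers on blocks of variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy box-constrained quadratic: minimize 0.5 * x^T Q x - c^T x  s.t.  0 <= x <= 1.
d = 20
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)             # symmetric positive definite
c = rng.standard_normal(d)
x = np.zeros(d)

lip = np.diag(Q).copy()             # per-coordinate curvature (Lipschitz constants)
for it in range(500):
    grad = Q @ x - c
    # Greedy rule: pick the coordinate whose projected coordinate step moves the most.
    moves = np.clip(x - grad / lip, 0.0, 1.0) - x
    j = int(np.argmax(np.abs(moves)))
    if abs(moves[j]) < 1e-10:       # no coordinate can make progress: stop
        break
    x[j] += moves[j]                # update only coordinate j

print("objective:", 0.5 * x @ Q @ x - c @ x)
```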


An Asynchronous Parallel Stochastic Coordinate Descent Algorithm

We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of proces...
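A minimal serial sketch of the stochastic coordinate step on a toy least-squares problem is shown below; in the paper, each core runs essentially this loop asynchronously, reading a possibly stale copy of x. The problem sizes and iteration count are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy smooth unconstrained problem: minimize 0.5 * ||A x - b||^2.
n, d = 100, 30
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x = np.zeros(d)

lip = (A * A).sum(axis=0)          # per-coordinate Lipschitz constants ||A_j||^2
for it in range(5000):
    j = rng.integers(d)            # pick one coordinate uniformly at random
    g_j = A[:, j] @ (A @ x - b)    # partial gradient along coordinate j
    x[j] -= g_j / lip[j]           # exact minimization along that coordinate
print("residual norm:", np.linalg.norm(A @ x - b))
```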


Asynchronous Decentralized Parallel Stochastic Gradient Descent

Recent work shows that decentralized parallel stochastic gradient descent (D-PSGD) can outperform its centralized counterpart both theoretically and practically. While asynchronous parallelism is a powerful technology to improve the efficiency of parallelism in distributed machine learning platforms and has been widely used in many popular machine learning software packages and solvers based on centrali...
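For orientation, the following sketch implements a synchronous decentralized-SGD loop on a toy linear-regression task: each worker takes a local stochastic gradient step and then averages its model copy with its two ring neighbours. It only shows the gossip-plus-SGD structure; the paper's contribution is the asynchronous version that removes the global synchronization barrier, which is not reproduced here. All sizes, the learning rate, and the ring topology are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy synchronous decentralized parallel SGD on linear regression:
# each worker holds its own data shard and its own copy of the model.
n_workers, d, n_local, lr = 4, 10, 200, 0.05
true_w = rng.standard_normal(d)
data = []
for _ in range(n_workers):
    X = rng.standard_normal((n_local, d))
    y = X @ true_w + 0.1 * rng.standard_normal(n_local)
    data.append((X, y))

models = [np.zeros(d) for _ in range(n_workers)]
for step in range(300):
    new_models = []
    for i, (X, y) in enumerate(data):
        k = rng.integers(n_local)                  # one-sample stochastic gradient
        g = (X[k] @ models[i] - y[k]) * X[k]
        local = models[i] - lr * g                 # local SGD step
        left = models[(i - 1) % n_workers]
        right = models[(i + 1) % n_workers]
        new_models.append((local + left + right) / 3.0)   # gossip average on a ring
    models = new_models                            # synchronous round ends here

print("max distance to true weights:",
      max(np.linalg.norm(m - true_w) for m in models))
```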


DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization

Nonnegative matrix factorization (NMF) has attracted much attention in the last decade as a dimension reduction method in many applications. Due to the explosion in the size of data, naturally the samples are collected and stored distributively in local computational nodes. Thus, there is a growing need to develop algorithms in a distributed memory architecture. We propose a novel distributed a...
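As a point of reference for the objective being minimized, the sketch below runs single-node NMF with the classic Lee-Seung multiplicative updates, a simple two-block alternating scheme. It is not the DID algorithm, which performs incremental block coordinate updates with the data distributed over compute nodes; sizes and iteration counts here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy single-node NMF: minimize ||V - W H||_F^2 over W >= 0, H >= 0,
# using the classic Lee-Seung multiplicative updates (a two-block alternating
# scheme). The DID paper instead distributes column blocks of V and H over
# compute nodes and applies incremental block coordinate updates.
m, n, r = 40, 60, 5
V = np.abs(rng.standard_normal((m, n)))
W = np.abs(rng.standard_normal((m, r)))
H = np.abs(rng.standard_normal((r, n)))

eps = 1e-9                                  # avoid division by zero
for it in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)    # update block H with W fixed
    W *= (V @ H.T) / (W @ H @ H.T + eps)    # update block W with H fixed

print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```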


Learning Output Kernels with Block Coordinate Descent

We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space which may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space of vector-valued func...



Journal

Journal title: IEEE Transactions on Big Data

Year: 2022

ISSN: 2372-2096, 2332-7790

DOI: https://doi.org/10.1109/tbdata.2022.3230335